
    An Entropy Sumset Inequality and Polynomially Fast Convergence to Shannon Capacity Over All Alphabets

    We prove a lower bound on the increase in entropy when two copies of a conditional random variable X | Y, with X supported on Z_q = {0,1,...,q-1} for prime q, are summed modulo q. Specifically, given two i.i.d. copies (X_1,Y_1) and (X_2,Y_2) of a pair of random variables (X,Y), with X taking values in Z_q, we show H(X_1 + X_2 | Y_1, Y_2) - H(X|Y) >= alpha(q) * H(X|Y)(1 - H(X|Y)) for some alpha(q) > 0, where H(.) is the entropy normalized by a factor of log_2(q). In particular, if X | Y is not close to being fully random or fully deterministic, that is, H(X|Y) in (gamma, 1-gamma), then the entropy of the sum increases by Omega_q(gamma). Our motivation is an effective analysis of the finite-length behavior of polar codes, for which the linear dependence on gamma is quantitatively important. The assumption that q is prime is necessary: for X supported uniformly on a proper subgroup of Z_q we have H(X+X) = H(X). For X supported on torsion-free groups (which have no nontrivial finite subgroups) and without conditioning, a sumset inequality for the absolute increase in (unnormalized) entropy was shown by Tao [Tao, CP&R 2010]. We use our sumset inequality to analyze Arıkan's construction of polar codes and prove that for any q-ary source X, where q is any fixed prime, and any epsilon > 0, polar codes allow efficient data compression of N i.i.d. copies of X into (H(X)+epsilon)N q-ary symbols, as soon as N is polynomially large in 1/epsilon. We obtain capacity-achieving source codes with similar guarantees for composite alphabets by factoring q into primes and combining a polar code for each prime in the factorization. A consequence of our result for noisy channel coding is that for all discrete memoryless channels, there are explicit codes enabling reliable communication within epsilon > 0 of the symmetric Shannon capacity at a block length and decoding complexity bounded by a polynomial in 1/epsilon. This result was previously known only for binary-input channels [Guruswami/Xia, FOCS '13; Hassani/Alishahi/Urbanke, CoRR 2013]; this work extends it to channels over any alphabet.
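
    As a quick numerical illustration of the inequality (a sketch only, not a substitute for the proof), the following Python snippet picks a hypothetical skewed distribution on Z_5, convolves it with itself modulo 5, and compares the normalized entropies; the distribution and the prime q = 5 are arbitrary choices for illustration.

```python
import numpy as np

def normalized_entropy(p, q):
    """Shannon entropy of distribution p, normalized by log2(q)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(q))

q = 5  # a prime alphabet size, chosen purely for illustration
# Hypothetical skewed distribution on Z_5 (the paper's statement covers general
# conditional distributions X | Y; here there is no conditioning).
p = np.array([0.5, 0.2, 0.15, 0.1, 0.05])

# Distribution of (X_1 + X_2) mod q for i.i.d. X_1, X_2 ~ p: cyclic convolution.
p_sum = np.zeros(q)
for a in range(q):
    for b in range(q):
        p_sum[(a + b) % q] += p[a] * p[b]

h, h_sum = normalized_entropy(p, q), normalized_entropy(p_sum, q)
print(f"H(X) = {h:.4f}   H(X_1+X_2) = {h_sum:.4f}   increase = {h_sum - h:.4f}")
# The paper's inequality asserts increase >= alpha(q) * H(X) * (1 - H(X)) for
# some alpha(q) > 0; this script only checks the increase numerically.
```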

    A Fast Algorithm for Well-Spaced Points and Approximate Delaunay Graphs

    We present a new algorithm that produces a well-spaced superset of points conforming to a given input set in any dimension with guaranteed optimal output size. We also provide an approximate Delaunay graph on the output points. Our algorithm runs in expected time O(2^{O(d)}(n log n + m)), where n is the input size, m is the output point set size, and d is the ambient dimension. The constants only depend on the desired element quality bounds. To gain this new efficiency, the algorithm approximately maintains the Voronoi diagram of the current set of points by storing a superset of the Delaunay neighbors of each point. By retaining the quality of the Voronoi diagram and avoiding the storage of the full Voronoi diagram, a simple exponential dependence on d is obtained in the running time. Thus, if one only wants the approximate neighbor structure of a refined Delaunay mesh conforming to a set of input points, the algorithm will return a graph of size 2^{O(d)} m in 2^{O(d)}(n log n + m) expected time. If m is superlinear in n, then we can produce a hierarchically well-spaced superset of size 2^{O(d)} n in 2^{O(d)} n log n expected time.
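
    For reference, the following sketch computes the exact Delaunay neighbor sets of a small 2-D point set with SciPy. The paper's algorithm instead maintains an approximate superset of these neighbors per point while refining, and never builds the full diagram, so this shows only the structure being approximated, not the paper's algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay
from collections import defaultdict

# Small random 2-D point set (dimension and size are arbitrary for illustration).
rng = np.random.default_rng(0)
points = rng.random((20, 2))

tri = Delaunay(points)
neighbors = defaultdict(set)
for simplex in tri.simplices:      # each simplex is a triangle in 2-D
    for i in simplex:
        for j in simplex:
            if i != j:
                neighbors[i].add(j)  # i and j share a Delaunay edge

for i in sorted(neighbors):
    print(i, sorted(neighbors[i]))
```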

    Streaming Complexity of Approximating Max 2CSP and Max Acyclic Subgraph

    We study the complexity of estimating the optimum value of a Boolean 2CSP (arity-two constraint satisfaction problem) in the single-pass streaming setting, where the algorithm is presented the constraints in an arbitrary order. We give a streaming algorithm that, with high probability, estimates the optimum within a factor approaching 2/5 using logarithmic space. This beats the trivial factor-1/4 estimate obtained by simply outputting one quarter of the total number of constraints. The inspiration for our work is a lower bound of Kapralov, Khanna, and Sudan (SODA '15), who showed that a similar trivial estimate (of factor 1/2) is the best one can do for Max CUT. Their lower bound implies that beating a factor of 1/2 for Max DICUT (a special case of Max 2CSP), and in particular distinguishing the case when the optimum is m/2 from the case when it is at most (1/4+eps)m, where m is the total number of edges, requires polynomial space. We complement this hardness result by showing that for DICUT, one can distinguish between the case in which the optimum exceeds (1/2+eps)m and the case in which it is close to m/4. We also prove that estimating the size of the maximum acyclic subgraph of a directed graph, when its edges are presented in a single-pass stream, within a factor better than 7/8 requires polynomial space.
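
    As a baseline, here is a minimal sketch of the trivial single-pass estimator mentioned above: count the constraints and output m/4. Assuming each constraint is individually satisfiable, a uniformly random assignment satisfies it with probability at least 1/4, so m/4 is always a valid lower bound on the optimum; the paper's 2/5-factor algorithm is more involved and is not reproduced here.

```python
# Trivial factor-1/4 streaming estimate for Boolean 2CSP.
# Only O(log m) bits of state (a counter) are kept during the single pass.
def trivial_2csp_estimate(constraint_stream):
    m = 0
    for _ in constraint_stream:  # constraints arrive in arbitrary order
        m += 1
    return m / 4

# Example usage with a hypothetical stream of (u, v, relation) constraints.
stream = [(0, 1, "OR"), (1, 2, "DICUT"), (2, 0, "XOR")]
print(trivial_2csp_estimate(stream))  # 0.75
```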

    Communication with Partial Noiseless Feedback

    We introduce the notion of one-way communication schemes with partial noiseless feedback. In this setting, Alice wishes to communicate a message to Bob by using a communication scheme that involves sending a sequence of bits over a channel while receiving feedback bits from Bob for a delta fraction of the transmissions. An adversary is allowed to corrupt up to a constant fraction of Alice's transmissions, while the feedback is always uncorrupted. Motivated by questions related to coding for interactive communication, we seek to determine the maximum error rate, as a function of delta with 0 <= delta <= 1, such that Alice can send a message to Bob via some protocol with a delta fraction of noiseless feedback. The case delta = 1 corresponds to full feedback, in which the result of Berlekamp ['64] implies that the maximum tolerable error rate is 1/3, while the case delta = 0 corresponds to no feedback, in which the maximum tolerable error rate is 1/4, achievable by use of a binary error-correcting code. In this work, we show that for any delta in (0, 1] and gamma in [0, 1/3), there exists a randomized communication scheme with noiseless delta-feedback such that the probability of miscommunication is low as long as no more than a gamma fraction of the rounds are corrupted. Moreover, we show that for any delta in (0, 1] and gamma < f(delta), there exists a deterministic communication scheme with noiseless delta-feedback that always decodes correctly as long as no more than a gamma fraction of the rounds are corrupted. Here f is a monotonically increasing, piecewise linear, continuous function with f(0) = 1/4 and f(1) = 1/3. In both cases the rate of communication is constant (dependent on delta and gamma but independent of the input length).
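
    A back-of-the-envelope note (standard coding theory, not taken from the paper) on why the delta = 0 endpoint is 1/4: a binary code with relative minimum distance delta_C uniquely decodes any adversarial error fraction gamma below delta_C/2, and for any gamma < 1/4 the Gilbert-Varshamov bound supplies a constant-rate binary code with delta_C > 2*gamma.

```latex
% Unique decoding: if the corrupted fraction gamma satisfies gamma < delta_C/2,
% the received word remains closer to the transmitted codeword than to any other.
\gamma < \frac{\delta_C}{2}, \qquad \delta_C \to \frac{1}{2}
  \quad\Longrightarrow\quad \gamma \to \frac{1}{4}.
```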

    On the Sensitivity Conjecture for Read-k Formulas

    Various combinatorial and algebraic parameters are used to quantify the complexity of a Boolean function. Among them, sensitivity is one of the simplest and block sensitivity is one of the most useful. Nisan (1989) and Nisan and Szegedy (1991) showed that block sensitivity and several other parameters, such as certificate complexity, decision tree depth, and degree over R, are all polynomially related to one another. The sensitivity conjecture states that there is also a polynomial relationship between sensitivity and block sensitivity, thus supplying the "missing link". Since its introduction in 1991, the sensitivity conjecture has remained a challenging open question in the study of Boolean functions. One natural approach is to prove it for special classes of functions. For instance, the conjecture is known to be true for monotone functions, symmetric functions, and functions describing graph properties. In this paper, we consider the conjecture for Boolean functions computable by read-k formulas. A read-k formula is a tree with Boolean gates at its internal nodes in which each variable appears at most k times among the leaves. We show that the sensitivity conjecture holds for read-once formulas with gates computing symmetric functions. We next consider regular formulas with OR and AND gates. A formula is regular if it is a leveled tree in which all gates at a given level have the same fan-in and compute the same function. We prove the sensitivity conjecture for constant-depth regular read-k formulas for constant k.
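
    To make the central parameter concrete, here is a small brute-force sketch (illustrative only, not from the paper) that computes the sensitivity of a Boolean function given as a Python callable; the example function is a hypothetical read-once formula OR(AND(x1, x2), AND(x3, x4)). Block sensitivity is defined analogously, with disjoint blocks of coordinates flipped simultaneously, and the conjecture asks whether it is at most polynomial in the quantity computed below.

```python
from itertools import product

def sensitivity(f, n):
    """Max over inputs x of the number of coordinates whose flip changes f(x)."""
    best = 0
    for bits in product((0, 1), repeat=n):
        x = list(bits)
        fx = f(x)
        flips = 0
        for i in range(n):
            y = x.copy()
            y[i] ^= 1              # flip coordinate i
            if f(y) != fx:
                flips += 1
        best = max(best, flips)
    return best

# Hypothetical read-once formula: OR of two ANDs over disjoint variables.
g = lambda x: (x[0] & x[1]) | (x[2] & x[3])
print(sensitivity(g, 4))  # prints 2 for this function
```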